Making machine learning robust against adversarial inputs
Similar resources
Networking the Boids is More Robust Against Adversarial Learning
Swarm behavior using Boids-like models has been studied primarily using close-proximity spatial sensory information (e.g. vision range). In this study, we propose a novel approach in which the classic definition of boids’ neighborhood that relies on sensory perception and Euclidean space locality is replaced with graph-theoretic network-based proximity mimicking communication and social network...
Robust Adversarial Reinforcement Learning
Deep neural networks coupled with fast simulation and improved computation have led to recent successes in the field of reinforcement learning (RL). However, most current RL-based approaches fail to generalize since: (a) the gap between simulation and real world is so large that policy-learning approaches fail to transfer; (b) even if policy learning is done in real world, the data scarcity lea...
Learning with stochastic inputs and adversarial outputs
Most of the research in online learning is focused either on the problem of adversarial classification (i.e., both inputs and labels are arbitrarily chosen by an adversary) or on the traditional supervised learning problem in which samples are independent and identically distributed according to a stationary probability distribution. Nonetheless, in a number of domains the relationship between ...
Foundations of Adversarial Machine Learning
As classifiers are deployed to detect malicious behavior ranging from spam to terrorism, adversaries modify their behaviors to avoid detection (e.g., [4, 3, 6]). This makes the very behavior the classifier is trying to detect a function of the classifier itself. Learners that account for concept drift (e.g., [5]) are not sufficient since they do not allow the change in concept to depend on the ...
Adversarial Machine Learning at Scale
Adversarial examples are malicious inputs designed to fool machine learning models. They often transfer from one model to another, allowing attackers to mount black box attacks without knowledge of the target model’s parameters. Adversarial training is the process of explicitly training a model on adversarial examples, in order to make it more robust to attack or to reduce its test error on cle...
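The snippet above describes adversarial examples and adversarial training only in the abstract. As a concrete illustration, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), a standard way to generate adversarial examples, applied to a hypothetical toy linear classifier (the model, weights, and function names below are illustrative assumptions, not from the article):

```python
import numpy as np

def fgsm_linear(x, y, w, b, eps):
    """FGSM for a binary logistic model p = sigmoid(w.x + b).

    Perturbs x by eps in the direction of the sign of the gradient of the
    cross-entropy loss with respect to the input, which for this model is
    d(loss)/dx = (p - y) * w.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Toy example: a point correctly classified as class 1 (w.x + b > 0)
# is pushed across the decision boundary by a small perturbation.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])          # w.x + b = 1.5  -> class 1
x_adv = fgsm_linear(x, 1.0, w, b, eps=0.9)
```

Adversarial training, as the abstract notes, then mixes such perturbed inputs (with their original labels) into the training set so the model learns to resist them.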
Journal
Journal title: Communications of the ACM
Year: 2018
ISSN: 0001-0782,1557-7317
DOI: 10.1145/3134599